Search results for "Depth map"

Showing 10 of 19 documents

Visual contact with catadioptric cameras

2015

Abstract: Time to contact, or time to collision (TTC), is information of utmost importance for animals as well as for mobile robots because it enables them to avoid obstacles; it is a convenient way to analyze the surrounding environment. The problem of TTC estimation has been discussed extensively for perspective images. Although many works have shown the interest of omnidirectional cameras for robotic applications such as localization, motion estimation, and monitoring, few use omnidirectional images to compute the TTC. In this paper, we show that TTC can also be estimated on catadioptric images. We present two approaches for TTC estimation that use the optical flow directly or indirectly, based on a de-rotation strat…
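
As a point of reference for the classical perspective-image case mentioned in the abstract, the sketch below estimates TTC from the divergence of dense optical flow (for a fronto-parallel approaching surface, div(flow) ≈ 2/TTC, with TTC expressed in frames). It is not the paper's catadioptric/de-rotation method; the OpenCV parameters are illustrative.

```python
# Hedged sketch: classical TTC from optical-flow divergence on perspective
# images (the paper's catadioptric/de-rotation variant is not reproduced here).
import cv2
import numpy as np

def ttc_from_flow(prev_gray, curr_gray):
    # Dense optical flow between two consecutive grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    u, v = flow[..., 0], flow[..., 1]
    # For a fronto-parallel approaching surface, div(flow) ≈ 2 / TTC.
    div = np.gradient(u, axis=1) + np.gradient(v, axis=0)
    return 2.0 / (np.mean(div) + 1e-9)   # TTC in frames
```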

Subjects: Computer science; General Mathematics; Image processing and computer vision; Optical flow; Catadioptric system; Omnidirectional camera; Depth map; Robotics [cs.RO]; Computer vision; Pixel; Perspective (graphical); Mobile robot; Real image; Computer Science Applications; Control and Systems Engineering; Obstacle; Artificial intelligence; Software

Three-Dimensional Integral-Imaging Display From Calibrated and Depth-Hole Filtered Kinect Information

2016

We exploit the Kinect's capacity to capture a dense depth map in order to display static three-dimensional (3D) images with full parallax. This is done by using the IR and RGB cameras of the Kinect. From the depth map and RGB information, we obtain an integral image by projecting the information through a virtual pinhole array. The integral image is displayed on our integral-imaging monitor, which provides the observer with horizontal and vertical perspectives of large 3D scenes. However, due to the Kinect depth-acquisition procedure, many depthless regions appear in the captured depth map. These holes spread to the generated integral image, reducing its quality. To solve this drawback we …
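
As a minimal illustration of the depth-hole problem described above, the sketch below fills zero-valued (depthless) Kinect pixels with the median of their valid neighbours. It only conveys the idea; the paper's own filtering (the keywords mention a bilateral filter) is more elaborate, and the function name and parameters are assumptions.

```python
# Hedged sketch: fill "depthless" (zero) pixels of a Kinect depth map with
# the median of valid neighbours, repeated for a few passes.
import numpy as np

def fill_depth_holes(depth, kernel=5, iters=3):
    depth = depth.astype(np.float32).copy()
    r = kernel // 2
    for _ in range(iters):
        holes = np.argwhere(depth == 0)
        if holes.size == 0:
            break
        for y, x in holes:
            patch = depth[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            valid = patch[patch > 0]
            if valid.size:
                depth[y, x] = np.median(valid)
    return depth
```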

Subjects: Integral imaging; Computer science; Image processing and computer vision; Condensed Matter Physics; Stereo display; Electronic, Optical and Magnetic Materials; Optics; Depth map; Camera auto-calibration; Computer graphics (images); RGB color model; Computer vision; Bilateral filter; Artificial intelligence; Electrical and Electronic Engineering; Parallax; Camera resectioning
Source: Journal of Display Technology

Augmented Mirror: Interactive Augmented Reality System Based on Kinect

2011

In this paper we present a virtual character controlled by an actor in real time, who talks with an audience through an augmented mirror. The application, which integrates video images, the avatar, and other virtual objects within an Augmented Reality system, has been implemented using a mixture of technologies: two Kinect systems for motion capture, depth maps, and real images; a gyroscope to detect head movements; and control algorithms to manage the avatar's emotions.

Subjects: Augmented reality; Computer science; Image processing and computer vision; Virtual characters; Computer-mediated reality; Real image; Motion capture; Mixed reality; Depth map; Computer graphics (images); Computer vision; Artificial intelligence

Area-Based Depth Estimation for Monochromatic Feature-Sparse Orthographic Capture

2018

With the rapid development of light field technology, depth estimation has been highlighted as one of the critical problems in the field, and a number of approaches have been proposed to extract the depth of the scene. However, depth estimation by stereo matching becomes difficult and unreliable when the captured images lack both color and feature information. In this paper, we propose a scheme that extracts robust depth from monochromatic, feature-sparse scenes recorded in orthographic sub-aperture images. Unlike approaches which rely on the rich color and texture information across the sub-aperture views, our approach is based on depth from focus techniques. First, we superimpose shifted …
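
A rough sketch of the depth-from-focus idea the abstract starts to describe: the sub-aperture views are shifted and superimposed for each candidate disparity, an area-based focus measure scores the result, and the disparity of maximum focus is kept per pixel. The offset convention, the focus measure (local Laplacian energy), and all parameter names are assumptions rather than the paper's exact formulation.

```python
# Hedged sketch: shift-and-average refocusing of orthographic sub-aperture
# views followed by an area-based focus measure.
import numpy as np
from scipy.ndimage import shift as nd_shift, laplace, uniform_filter

def depth_from_focus(views, offsets, disparities, window=9):
    # views: list of HxW grayscale sub-aperture images
    # offsets: list of (dy, dx) grid positions of each view
    h, w = views[0].shape
    best_focus = np.full((h, w), -np.inf)
    depth = np.zeros((h, w))
    for d in disparities:
        # Refocus at disparity d by shifting each view toward the centre view.
        stack = [nd_shift(v, (dy * d, dx * d), order=1)
                 for v, (dy, dx) in zip(views, offsets)]
        refocused = np.mean(stack, axis=0)
        # Area-based focus measure: local energy of the Laplacian.
        focus = uniform_filter(laplace(refocused) ** 2, size=window)
        mask = focus > best_focus
        best_focus[mask] = focus[mask]
        depth[mask] = d
    return depth
```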

Subjects: Computer science; Feature extraction; Image processing and computer vision; Context (language use); Image processing; Depth map; Orthographic views; Computer vision; Computer graphics; Signal processing; Orthographic projection; Microscopy; Integral imaging; Feature (computer vision); Depth from focus; Monochromatic color; Artificial intelligence; Depth estimation; Focus (optics); Light field
Source: 2018 26th European Signal Processing Conference (EUSIPCO)

New Method of Microimages Generation for 3D Display

2018

In this paper, we propose a new method for the generation of microimages, which processes real 3D scenes captured with any method that permits the extraction of their depth information. The depth map of the scene, together with its color information, is used to create a point cloud. A set of elemental images of this point cloud is captured synthetically, and from it the microimages are computed. The main feature of this method is that the reference plane of the displayed images can be set at will while empty pixels are avoided. Another advantage is that the center point of the displayed images, as well as their scale and field of view, can be set. To show the final results, a 3D InI dis…
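
A minimal sketch of the first stage described above, back-projecting a depth map and its colour information into a point cloud with a pinhole model; the intrinsics fx, fy, cx, cy are placeholders, and the subsequent synthetic capture of elemental images and microimage computation are not covered.

```python
# Hedged sketch: depth map + RGB -> coloured point cloud under a pinhole model.
import numpy as np

def depth_to_point_cloud(depth, rgb, fx, fy, cx, cy):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    valid = points[:, 2] > 0            # drop empty (zero-depth) pixels
    return points[valid], colors[valid]
```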

Subjects: Computer science; Point cloud; 3D display; Stereo display; Set (abstract data type); Depth map; Plenoptic imaging; Point (geometry); Computer vision; Electrical and Electronic Engineering; Instrumentation; Microlens; Integral imaging; Pixel; Optics; Atomic and Molecular Physics and Optics; Feature (computer vision); Artificial intelligence; CGII; Image processing, digital techniques
Source: Sensors

Real-Time 3D Face Acquisition Using Reconfigurable Hybrid Architecture

2007

Acquiring 3D data of a human face is a general problem with applications in face recognition, virtual reality, and many other areas. It can be solved using stereovision. This technique consists of acquiring three-dimensional data from two cameras. The aim is to implement an algorithmic chain that makes it possible to obtain a three-dimensional space from two two-dimensional spaces: the two images coming from the two cameras. Several implementations have already been considered. We propose a new, simple real-time implementation based on a hybrid architecture (FPGA-DSP), enabling embedded and reconfigurable processing. We then show our method, which provides a depth map of …
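
For orientation only, a CPU sketch of the stereovision step rather than the paper's FPGA-DSP implementation: block matching produces a disparity map, which is converted to depth with Z = f·B/d (f: focal length in pixels, B: baseline). The matcher settings and calibration values below are illustrative.

```python
# Hedged sketch: depth from a rectified stereo pair via block matching.
import cv2
import numpy as np

def stereo_depth(left_gray, right_gray, focal_px=700.0, baseline_m=0.12):
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # OpenCV returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]   # Z = f * B / d
    return depth
```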

Subjects: Data processing; Computer science; Real-time computing; Electronics; Virtual reality; Metaverse; Facial recognition system; Computer engineering; Depth map; Face (geometry); Pattern recognition (psychology); Signal Processing; Reference architecture; Electrical and Electronic Engineering; Information Systems
Source: EURASIP Journal on Image and Video Processing

Real metrology by using depth map information

2004

Usually, an image carries no real information about the scene's depth in terms of absolute distance. In this paper, a method that extracts real depth measurements is developed. The approach starts by considering a region located in the center of the depth map. This region can be positioned interactively in any part of the depth map in order to measure the real distance of every object in the scene. The local maxima of this region's histogram are determined. Among these values, the largest, which represents the gray level of the most significant object, is chosen. This gray level is used in an exponential mapping function that converts, using the input camera settings, the depth map gr…
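
A hedged sketch of the described pipeline: take a region of the gray-level depth map, find the local maxima of its histogram, keep the strongest one, and convert that gray level to a metric distance. Since the paper's exponential mapping is truncated in the abstract, the form a·exp(b·gray) and the constants a, b are placeholders standing in for the camera-setting-dependent function.

```python
# Hedged sketch: region histogram peak -> assumed exponential gray-to-distance map.
import numpy as np

def region_distance(depth_gray, y0, x0, size=50, a=0.05, b=0.02):
    region = depth_gray[y0:y0 + size, x0:x0 + size]
    hist, _ = np.histogram(region, bins=256, range=(0, 256))
    # Local maxima of the histogram (bins larger than both neighbours).
    peaks = [i for i in range(1, 255) if hist[i] > hist[i - 1] and hist[i] > hist[i + 1]]
    gray = max(peaks, key=lambda i: hist[i]) if peaks else int(region.mean())
    return a * np.exp(b * gray)   # placeholder for the camera-dependent mapping
```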

Subjects: Depth from defocus; Computer science; Image processing and computer vision; Function (mathematics); Object (computer science); Measure (mathematics); Image (mathematics); Metrology; Depth map; Histogram; Computer vision; Artificial intelligence

Depth Map Generation by Image Classification

2004

This paper presents a novel, fully automatic technique to estimate depth information from a single input image. The proposed method is based on a new image classification technique able to classify digital images (including images in Bayer pattern format) as indoor, outdoor with geometric elements, or outdoor without geometric elements. Using the information collected in the classification step, a suitable depth map is estimated. The proposed technique is fully unsupervised and able to generate a depth map from a single view of the scene, requiring low computational resources.
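
Purely to illustrate the idea of class-dependent depth generation described above (not the paper's actual depth models), the sketch below assigns a qualitative depth ramp according to the classification label; the labels and ramps are invented placeholders.

```python
# Hedged sketch: a class-dependent qualitative depth map.
import numpy as np

def depth_from_class(height, width, label):
    rows = np.linspace(0.0, 1.0, height)[:, None]
    if label == "outdoor_no_geometry":
        # Landscape-like prior: depth grows from the bottom of the image upward.
        return np.repeat(1.0 - rows, width, axis=1)
    if label == "outdoor_geometry":
        # Ground plane plus distant structures: steeper ramp, saturating near the top.
        return np.repeat(np.clip(1.5 * (1.0 - rows), 0.0, 1.0), width, axis=1)
    # Indoor: roughly uniform mid-range depth as a neutral prior.
    return np.full((height, width), 0.5)
```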

Subjects: Digital image; Bayer filter; Contextual image classification; Depth map; Computer science; Color image; Image processing and computer vision; Digital imaging; Computer vision; Artificial intelligence; Image segmentation

3D image acquisition system based on shape from focus technique

2013

This paper describes the design of a 3D image acquisition system dedicated to natural complex scenes composed of randomly distributed objects with spatial discontinuities. In agronomic sciences, the 3D acquisition of natural scenes is difficult due to their complex nature. Our system is based on the Shape from Focus technique, initially used in the microscopic domain. We propose to adapt this technique to the macroscopic domain, and we detail the system as well as the image processing used to perform it. The Shape from Focus technique is a monocular and passive 3D acquisition method that resolves the occlusion problem affecting…
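
A compact sketch of the Shape from Focus principle the system builds on: for a stack of images taken at different focus distances, a local focus measure is evaluated per pixel and the focus position that maximises it gives the depth. The gradient-energy measure and window size are assumptions; the macroscopic adaptation described in the paper is not covered.

```python
# Hedged sketch: Shape from Focus over a focal stack.
import numpy as np
from scipy.ndimage import uniform_filter

def shape_from_focus(stack, focus_positions, window=9):
    # stack: list of HxW grayscale images, one per focus position
    best = np.full(stack[0].shape, -np.inf)
    depth = np.zeros(stack[0].shape)
    for img, z in zip(stack, focus_positions):
        gy, gx = np.gradient(img.astype(np.float32))
        measure = uniform_filter(gx ** 2 + gy ** 2, size=window)  # local gradient energy
        mask = measure > best
        best[mask] = measure[mask]
        depth[mask] = z
    return depth
```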

Subjects: Engineering; Signal and Image Processing [cs]; Life Sciences [q-bio]; Image processing and computer vision; Image processing; Agronomic scenes; Classification of discontinuities; Domain (software engineering); Shape from focus; Depth map; Computer vision; 3D image acquisition system; Focus measure; Depth of field; Electrical and Electronic Engineering; Instrumentation; Monocular; Scene statistics; Distributed object; Atomic and Molecular Physics and Optics; Artificial intelligence
Source: Sensors

P2D: a self-supervised method for depth estimation from polarimetry

2021

Monocular depth estimation is a recurring subject in the field of computer vision. Its ability to describe a scene via a depth map while relaxing the constraints imposed by perspective geometry tends to favor its use. However, despite the constant improvement of algorithms, most methods exploit only colorimetric information. Consequently, robustness to phenomena to which this modality is not sensitive, such as specularity or transparency, is neglected. In response, we propose using polarimetry as an input for a self-supervised monodepth network, exploiting polarization cues to encourage accurate reconstruction of scenes. Furthermore, we…
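
For context, a sketch of the standard self-supervised photometric loss (SSIM + L1 between a target frame and a view synthesised from the predicted depth) on which monodepth networks of this kind are typically trained; the polarimetric cues and the additional regularization term that P2D introduces are not reproduced here, and tensor shapes are assumptions.

```python
# Hedged sketch: SSIM + L1 photometric reconstruction loss for self-supervised monodepth.
import torch
import torch.nn.functional as F

def photometric_loss(target, warped, alpha=0.85):
    # target, warped: Bx3xHxW tensors; 'warped' is the source frame warped into
    # the target view using the predicted depth and relative pose.
    l1 = (target - warped).abs().mean(1, keepdim=True)
    mu_t = F.avg_pool2d(target, 3, 1, 1)
    mu_w = F.avg_pool2d(warped, 3, 1, 1)
    sigma_t = F.avg_pool2d(target ** 2, 3, 1, 1) - mu_t ** 2
    sigma_w = F.avg_pool2d(warped ** 2, 3, 1, 1) - mu_w ** 2
    sigma_tw = F.avg_pool2d(target * warped, 3, 1, 1) - mu_t * mu_w
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_t * mu_w + c1) * (2 * sigma_tw + c2)) / \
           ((mu_t ** 2 + mu_w ** 2 + c1) * (sigma_t + sigma_w + c2))
    ssim_loss = ((1 - ssim) / 2).clamp(0, 1).mean(1, keepdim=True)
    return (alpha * ssim_loss + (1 - alpha) * l1).mean()
```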

Subjects: Computer and information sciences; Monocular; Computer science; Computer Vision and Pattern Recognition (cs.CV); Polarimetry; Image processing and computer vision; Regularization (mathematics); Term (time); Specularity; Robustness (computer science); Depth map; Computer vision; Artificial intelligence; Transparency (data compression)